Closed
Bug 1460777
Opened 7 years ago
Closed 7 years ago
Obtain GCC toolchain dependencies from task artifacts / implement "fetch" tasks
Categories
(Firefox Build System :: Task Configuration, task)
Tracking
(firefox62 fixed)
RESOLVED
FIXED
mozilla62
| | Tracking | Status |
|---|---|---|
| firefox62 | --- | fixed |
People
(Reporter: gps, Assigned: gps)
References
(Blocks 1 open bug)
Details
Attachments
(3 files)
We currently download GCC toolchain dependencies from the Internets. This is brittle.
I'll explain more in the commit message of a patch I'm about to submit...
Comment hidden (mozreview-request)
Assignee
Comment 2 • 7 years ago
Honestly, I'll be surprised if I'm granted r+ on the initial review. I very much perceive what I'm doing here as a work in progress, and I would not at all be surprised if someone has high-level issues with what's implemented so far. Please take time to read the commit message to understand my thoughts on where things currently stand.
Comment 3 • 7 years ago
Comment on attachment 8974862 [details]
Bug 1460777 - Taskgraph tasks for retrieving remote content;
https://reviewboard.mozilla.org/r/243240/#review249076
Without looking too much into the implementation details, here are a few preliminary comments:
- removing GPG validation is not nice. I've thoroughly vetted the gpg keys, which gives a level of confidence when adding *new* archives (can you trust that everyone adding new references to gcc source archives has done due diligence and properly validated the gpg signatures *and* the gpg keys? That's footgun-y, and I don't want to have to rely on it).
- as you mention in the commit message, ideally we'd have something like mounts handled at the worker level for all workers. One reason mach toolchain artifact exists is because that's not the case (the other is that mach toolchain artifact is also meant to be used locally, and would have to exist anyway). The next best place for this is run-task. A separate script is kind of suboptimal, although I understand it's a way to unblock the situation (then again, how hard is it to move run-task to Python 3?)
Attachment #8974862 - Flags: review?(mh+mozilla)
Assignee
Comment 4 • 7 years ago
Comment on attachment 8974862 [details]
Bug 1460777 - Taskgraph tasks for retrieving remote content;
https://reviewboard.mozilla.org/r/243240/#review249076
I see your point regarding GPG keys and vetting *new* archives. I was hoping we could get rid of it because it is redundant with SHA256 for integrity verification. But, yeah, establishing a chain of trust when we import new artifacts is a good idea.
I should be able to add a mechanism to the YAML to define GPG keys/signatures for fetched content.
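As a sketch, the stanza might look something like this (field names mirror the `gpg-signature` block visible in the review diff later in this bug; the archive name, URL, hash, and size here are illustrative placeholders):

```yaml
gcc-6.4.0:
  description: GCC 6.4.0 source code
  run:
    using: fetch-url
    url: https://ftp.gnu.org/gnu/gcc/gcc-6.4.0/gcc-6.4.0.tar.xz
    sha256: '<sha256 of the archive>'
    size: 0  # byte size of the archive
    gpg-signature:
      sig-url: "{url}.sig"
      key-path: build/unix/build-gcc/13975A70E63C361C73AE69EF6EEB81F8981C74C7.key
```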
Regarding porting run-task to Python 3, I have patches and I /think/ a working Try push. Expect those to go up for review on bug 1460470 shortly. I was planning to keep Python 2.7 support in run-task for the short term, though, and merge `fetch-content` in later once run-task is Python 3 only. But if all in-tree consumers have Python 3, I could be convinced to drop support for 2.7 as part of the port. Let's discuss in bug 1460470.
Assignee
Comment 5 • 7 years ago
pmoore, jonas: I think you should read at least the commit message (and possibly the patch) to get an understanding of some of the things we're having to do in the Firefox CI world to effectively work around limitations in docker-worker and other aspects of TC.
Flags: needinfo?(pmoore)
Flags: needinfo?(jopsen)
Assignee
Comment 6 • 7 years ago
Regarding GPG validation, a concern with the use of GPG is that verification is not reproducible over time.
GPG keys have an expiration time, and once you are past the expiration time of a key, GPG refuses to verify something signed with that key. AFAIK there is no way to override GPG's defaults and have it verify using an expired key. This means that if the keys used to sign artifacts requiring GPG verification expire, we may be unable to run that task successfully, even if the underlying content is still available.
I totally grok wanting to use GPG to validate content. But I think actively using GPG in tasks is dangerous because it constitutes a time bomb.
I think the place we should be performing GPG verification is on initial import of content. i.e. we should have a tool to download the GCC (etc.) files, validate their GPG signatures, then commit the content hashes as part of the taskgraph.
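A sketch of such an import-time tool (the helper and keyring layout are hypothetical; the gpg invocations are standard GnuPG usage):

```python
#!/usr/bin/env python3
# Hypothetical one-off import helper: verify the GPG signature once, at
# vendoring time, and record only the SHA-256 for use inside tasks.
import hashlib
import subprocess
import urllib.request


def import_archive(url, key_path):
    archive = url.rsplit('/', 1)[-1]
    urllib.request.urlretrieve(url, archive)
    urllib.request.urlretrieve(url + '.sig', archive + '.sig')

    # Import the vetted key into an isolated keyring and verify against it.
    gpg = ['gpg', '--no-default-keyring', '--keyring', './import-keyring.gpg']
    subprocess.run(gpg + ['--import', key_path], check=True)
    subprocess.run(gpg + ['--verify', archive + '.sig', archive], check=True)

    # The content hash, not the signature, is what would get committed
    # in-tree for tasks to verify against.
    with open(archive, 'rb') as fh:
        print('%s sha256=%s' % (archive, hashlib.sha256(fh.read()).hexdigest()))
```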
Also, CCing Aki because I suspect he cares about such things.
Comment 7 • 7 years ago
We're currently planning on moving to an openssl CA model for cotvN. I also have been hinting whenever possible that having platform-supported artifact verification would vastly simplify what Chain of Trust verification needs to cover.
Comment hidden (mozreview-request)
Comment hidden (mozreview-request)
Assignee
Comment 10 • 7 years ago
GPG signature verification now implemented.
https://treeherder.mozilla.org/#/jobs?repo=try&revision=017fa59be19a9b015880c5c1e9a0e798f9c9e5c0 is a Try push demonstrating everything in action.
Assignee
Comment 11 • 7 years ago
(In reply to Aki Sasaki [:aki] from comment #7)
> We're currently planning on moving to an openssl CA model for cotvN. I also
> have been hinting whenever possible that having platform-supported artifact
> verification would vastly simplify what Chain of Trust verification needs to
> cover.
If this means that tasks would automatically download the signed chain-of-trust manifest, verify it, then automatically verify the integrity of fetched artifacts listed within the chain-of-trust manifest as they are fetched, then I'm 100% on board with this plan. Furthermore, I think artifacts should be supplemental "edges" in the task graph and that workers should fetch those artifacts from dependent tasks automatically.
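For illustration only, the per-artifact check could be as simple as the following, assuming a manifest that has already been signature-verified and that maps artifact names to hex SHA-256 digests (the real chain-of-trust manifest format is richer than this):

```python
import hashlib


def verify_fetched(manifest, name, data):
    """Check one fetched artifact against an already-verified manifest.

    `manifest` is assumed to map artifact names to hex SHA-256 digests;
    the actual chain-of-trust manifest carries more than this.
    """
    actual = hashlib.sha256(data).hexdigest()
    if actual != manifest[name]:
        raise ValueError('%s: digest %s does not match manifest entry %s'
                         % (name, actual, manifest[name]))
```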
Comment 12 • 7 years ago
For the record, if people would stop filing bugs and feature requests for docker-worker, we could do more work on replacing it with docker-engine, which does have features like this. I believe taskcluster-worker already supports `mounts` fetching from artifacts, but if not, that's the right place to add it.
Comment 13 • 7 years ago
(In reply to Dustin J. Mitchell [:dustin] pronoun: he from comment #12)
> For the record, if people would stop filing bugs and feature requests for
> docker-worker, we could do more work on replacing it with docker-engine,
> which does have features like this. I believe taskcluster-worker already
> supports `mounts` fetching from artifacts, but if not, that's the right
> place to add it.
I disagree - we should encourage users to file bugs; we can always resolve them as WONTFIX with an explanation that the feature will be, or already is, implemented in another worker. This way the requirement is tracked, together with the reasoning/justification for whether and where it will be (or was already) implemented.
Comment 14 • 7 years ago
This is pretty cool. One option is to use tooltool instead of rolling yet another tooltool :)
But I see the upside in this being controlled in-tree: it's easy to review, and it's easy to get an overview of what we use and store, and where we got it from.
> I believe taskcluster-worker already supports `mounts` fetching from artifacts, but if not, that's the right place to add it.
Correct. The docker-engine for tc-worker is less battle-tested at the moment, but it already has support for pre-loaded cache volumes and shared read-only pre-loaded cache volumes.
Flags: needinfo?(jopsen)
Comment 15 • 7 years ago
I totally agree with finding a solution to this age-old problem which still haunts us.
It feels like an intermediate caching proxy could play a big part in helping simplify the problem, in that
1) content gets cached, so we don't depend on third party up-time
2) we get E-Tags and caching directives for free
3) we can manage content expiry
I'm wondering if a well-configured http caching proxy might allow customisation of the http caching headers of proxied content, such as setting the E-Tag to the SHA256 of the content, so that we don't need to build too many more concerns into the task framework. For example, if the caching proxy can be configured to always return the E-Tag as the SHA256 of the content, tasks could log/validate the E-Tag header for external content. (Note the http specification is relatively flexible about what the E-Tag is or contains; it just has to be a unique identifier.)
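A sketch of the task-side validation under that assumption (the proxy behavior described here is hypothetical):

```python
import hashlib
import urllib.request


def fetch_via_proxy(url):
    # Assumes a hypothetical caching proxy that sets the ETag header to
    # the hex SHA256 of the response body.
    with urllib.request.urlopen(url) as resp:
        etag = (resp.headers.get('ETag') or '').strip('"')
        body = resp.read()
    if etag and hashlib.sha256(body).hexdigest() != etag:
        raise ValueError('proxy E-Tag does not match content hash')
    return body
```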
For content from artifacts, workers will soon be uploading and validating SHA256 natively (thanks to the sterling work jhford has done with the new blob storage type). For validating third party content downloaded from a raw URL, we still need the ability to specify a SHA256 for it, either in the task itself, or the task payload via a worker feature.
It feels like task complexity is already pretty high, so it would be nice if we can do as much of this as possible with existing standards (like http caching, tooltool, ...), if it helps to avoid creating more task graph complexity.
Flags: needinfo?(pmoore)
Assignee
Comment 17 • 7 years ago
While I think a caching HTTP proxy could be useful in many scenarios, for Firefox CI, we want our tasks to be hermetic and therefore reproducible over time. *All* network services intrinsically violate this property. So a caching HTTP proxy papers over the underlying problem and suffers from the same limitations (e.g. once content disappears from the origin server, you can't get it again).
The world I'm trying to trend us towards is one where *all* dependencies are available as artifacts from dependent tasks. In the case where we do need to obtain something like a source archive from a 3rd party server, we either vendor those things in a highly-available service with no purge policy (e.g. tooltool) or we create tasks that effectively do the same thing as task artifacts. Dustin's suggestion in bug 1460943 was to set the artifact expiration to something like the year 3000. I may do that for the "fetch" tasks I've devised in the patches on this bug. Maybe as a follow-up though. I recognize that this is a somewhat contentious topic and I'd very much like to land something that is strictly better than status quo, even if it may not be perfect (where "perfect" here in my mind means "hermetic").
Comment 18 • 7 years ago
Greg, so that you know, we're working on building a generic "Object Storage" system for taskcluster, with which we'll store artifacts. Basically, we're taking the complex bits of how to upload, store, mirror and serve large objects in a CI world out of our Queue and putting them into a dedicated service. This includes doing things like storing sha256 hashes, size and content-encoding. This project also involves having a set of libraries and a CLI tool which makes interacting with the service a trivial proposition. It will also handle cross-datacenter replication, as well as enforcing the validity of the supplied sha256 values wherever possible (based on the underlying storage system).
For the storage of artifacts, we'd have the Queue as a front-end to this Object Storage system. The Queue will manage the namespace, but defer all of the operations to the Object Storage system. One possibility is having a second instance of this Object service with a very small and minimal "toolchains" front-end, which would allow developers to store toolchains using the exact same underlying storage system, but with its own dedicated storage accounts. This toolchain frontend could implement whatever auth system is needed for uploading toolchains. We could add toolchains using different methods, like a PUT endpoint with a JSON body like {url, sha256}, or using the normal object service upload methods directly.
This would let us reuse the code we're already going to write for managing large objects in the Queue to manage large objects for toolchains.
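From the client side, the `{url, sha256}` variant might look like this (the endpoint, URL scheme, and auth are all hypothetical; no such service exists yet):

```python
import json
import urllib.request


def register_toolchain(service_root, name, url, sha256, token):
    # Hypothetical "toolchains" front-end: ask the service to ingest a
    # remote archive and pin it to the given SHA-256.
    body = json.dumps({'url': url, 'sha256': sha256}).encode()
    req = urllib.request.Request(
        '%s/toolchains/%s' % (service_root, name),
        data=body,
        headers={'Content-Type': 'application/json',
                 'Authorization': 'Bearer %s' % token},
        method='PUT')
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```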
Does this sound interesting at all?
Flags: needinfo?(jhford)
Comment 19 • 7 years ago
(In reply to Pete Moore [:pmoore][:pete] from comment #13)
> I disagree - we should encourage users to file bugs; we can always resolve
> them as WONTFIX with an explanation this will be, or is already, implemented
> in another worker. This way the requirement is tracked, together with
> reasoning/justification of if and where it will be (or was already)
> implemented.
I stand corrected. Bugs are fine, but I'm massively in favor of closing those as WONTFIX :)
Comment 20 • 7 years ago
Comment on attachment 8974862 [details]
Bug 1460777 - Taskgraph tasks for retrieving remote content;
https://reviewboard.mozilla.org/r/243240/#review249972
::: taskcluster/scripts/misc/fetch-content:39
(Diff revision 2)
> + validated against those requirements and ``IntegrityError`` will be
> + raised if expectations do not match.
> +
> + Because verification cannot occur until the file is completely downloaded
> + it is recommended for consumers to not do anything meaningful with the
> + data if content verification is being used. To securely handle retrieved
"..until it is completely downloaded and the generator has finished successfully"?
::: taskcluster/taskgraph/transforms/job/fetch.py:76
(Diff revision 2)
> + artifact_name = run['url'].split('/')[-1]
> +
> + worker.setdefault('artifacts', []).append({
> + 'type': 'directory',
> + 'name': 'public',
> + 'path': '/builds/worker/artifacts',
I think you'll want to set expires to some unimaginable future date here
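Something like the following, assuming a per-artifact `expires` field is honored here (whether the expiry belongs on the artifact entry or at the task level is an assumption, and the date format shown is illustrative):

```python
worker = {}  # stands in for the worker dict in the transform above

worker.setdefault('artifacts', []).append({
    'type': 'directory',
    'name': 'public',
    'path': '/builds/worker/artifacts',
    # Assumed field/format: pin the artifact far beyond the default expiry.
    'expires': {'relative-datestamp': '1000 years'},
})
```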
::: taskcluster/taskgraph/transforms/job/fetch.py:111
(Diff revision 2)
> +
> + attributes = taskdesc.setdefault('attributes', {})
> + attributes['fetch-artifact'] = 'public/%s' % artifact_name
> +
> + if not taskgraph.fast:
> + cache_name = taskdesc['label'].replace('{}-'.format(config.kind), '', 1)
With no level here, I think we are confident that a try job cannot poison a higher-level job because the content hash is verified?
Attachment #8974862 - Flags: review?(dustin) → review+
Assignee
Updated • 7 years ago
Blocks: fx-hermetic-ci
Assignee
Updated • 7 years ago
Summary: Obtain GCC toolchain dependencies from task artifacts → Obtain GCC toolchain dependencies from task artifacts / implement "fetch" tasks
Assignee
Comment 21 • 7 years ago
https://treeherder.mozilla.org/#/jobs?repo=autoland&revision=94a9641c5a018cfe729ebe748e75a7c4373e4322 (which is a blocker of this bug) is currently failing (and was backed out) because www.multiprecision.org is currently down. I give up.
Assignee
Comment 22 • 7 years ago
(In reply to John Ford [:jhford] CET/CEST Berlin Time from comment #18)
> Does this sound interesting at all?
That all sounds *very* interesting!
If the 2nd "toolchains" instance of this service were basically "store the things for all of time," then I think that would address concerns I have. We would then store all remote hosted content in this long-term "lockbox" (either via special tasks that run once or via a one-off process that developers run when introducing new long-term content - similar to the way tooltool works). All other artifacts that could be derived from version control and the long-term artifacts would be in the "main" storage area and subject to expiration, etc.
If you have links to the design document, I could try to find time to review it if you think that would be helpful.
Thanks for writing the detailed comment!
Comment 23 • 7 years ago
Comment on attachment 8974862 [details]
Bug 1460777 - Taskgraph tasks for retrieving remote content;
https://reviewboard.mozilla.org/r/243240/#review251618
I'll note that a lot of your fetch script is duplicating code from tooltool.
::: taskcluster/ci/fetch/toolchains.yml:76
(Diff revision 2)
> + size: 62462388
> + gpg-signature:
> + sig-url: "{url}.sig"
> + key-path: build/unix/build-gcc/13975A70E63C361C73AE69EF6EEB81F8981C74C7.key
> +
> +gmp-4.3.2:
The point of the special cases in the build-gcc.sh script is that we *don't* use gmp 4.3.2, mpfr 2.4.2 or mpc 0.8.1, but 5.1.3, 3.1.5 and 0.8.2, respectively. So we don't need those fetch declarations. Interestingly, you did the right thing in the fetches used by the toolchain tasks, so you just need to remove the unused fetch tasks.
::: taskcluster/scripts/misc/build-gcc-4.9-linux.sh:20
(Diff revision 2)
> gcc_version=4.9.4
> gcc_ext=bz2
> binutils_version=2.25.1
> binutils_ext=bz2
>
> -# GPG key used to sign GCC
> +$HOME_DIR/src/taskcluster/scripts/misc/fetch-content task-artifacts-env MOZ_FETCHES $root_dir
I'd rather go with `fetch-content task-artifacts -d $root_dir $MOZ_FETCHES`
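The suggested invocation implies a CLI shape roughly like this (a sketch; the real subcommand wiring in `fetch-content` differs):

```python
import argparse


def parse_task_artifacts(argv):
    # Sketch of the suggested CLI shape: a destination flag plus
    # positional fetch entries (e.g. the expansion of $MOZ_FETCHES).
    parser = argparse.ArgumentParser(prog='fetch-content task-artifacts')
    parser.add_argument('-d', '--dest', required=True,
                        help='directory to download into')
    parser.add_argument('fetches', nargs='+',
                        help='artifacts to fetch')
    return parser.parse_args(argv)
```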
::: taskcluster/scripts/misc/build-gcc-4.9-linux.sh:23
(Diff revision 2)
> binutils_ext=bz2
>
> -# GPG key used to sign GCC
> -$GPG --import $data_dir/13975A70E63C361C73AE69EF6EEB81F8981C74C7.key
> -# GPG key used to sign binutils
> -$GPG --import $data_dir/EAF1C276A747E9ED86210CBAC3126D3B4AE55E93.key
> +$HOME_DIR/src/taskcluster/scripts/misc/fetch-content task-artifacts-env MOZ_FETCHES $root_dir
> +
> +pushd $root_dir/gcc-$gcc_version
> +ln -sf ../cloog-0.18.1 cloog
It kind of sucks that we now have one more location holding the version numbers, compared to before. It also kind of sucks that we lose the benefit of using gcc's contrib/download_prerequisites, which, although an awful hack, ensures we have handled all the gcc dependencies when trying a new version of gcc.
::: taskcluster/scripts/misc/fetch-content:92
(Diff revision 2)
> + try:
> + path.unlink()
> + except FileNotFoundError:
> + pass
> +
> + tmp = path.with_name('%s.tmp' % path.name)
there's a theoretical path encoding problem hidden in here, isn't there?
::: taskcluster/scripts/misc/fetch-content:101
(Diff revision 2)
> + try:
> + with tmp.open('wb') as fh:
> + for chunk in stream_download(url, sha256=sha256, size=size):
> + fh.write(chunk)
> +
> + print('renaming %s -> %s' % (tmp, path))
"renaming foo/bar/baz.tmp -> foo/bar/baz" seems like useless redundancy. "renaming to %s"?
::: taskcluster/scripts/misc/fetch-content:198
(Diff revision 2)
> +
> +def command_static_url(args):
> + gpg_sig_url = args.gpg_sig_url
> + gpg_env_key = args.gpg_key_env
> +
> + if (gpg_sig_url and not gpg_env_key) or (not gpg_sig_url and gpg_env_key):
if bool(gpg_sig_url) != bool(gpg_env_key)
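Restated as a self-contained check, the suggestion reads as "error out when exactly one of the two is set":

```python
def require_together(gpg_sig_url, gpg_key_env):
    # bool(x) != bool(y) is true exactly when one value is set and the
    # other is not -- the invalid case, since the two must come together.
    if bool(gpg_sig_url) != bool(gpg_key_env):
        raise SystemExit('GPG signature URL and key environment variable '
                         'must be specified together')
```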
::: taskcluster/taskgraph/transforms/job/fetch.py:111
(Diff revision 2)
> +
> + attributes = taskdesc.setdefault('attributes', {})
> + attributes['fetch-artifact'] = 'public/%s' % artifact_name
> +
> + if not taskgraph.fast:
> + cache_name = taskdesc['label'].replace('{}-'.format(config.kind), '', 1)
I think a level is still needed, if only because we probably want a shorter expiration for try artifacts. It's better if, after actually landing, we get a fresh fetch job with a longer-term expiry rather than using whatever try artifact exists until it expires, only to have the non-try builds re-trigger a fetch several weeks later, seemingly out of the blue.
Attachment #8974862 - Flags: review?(mh+mozilla)
Comment 24 • 7 years ago
Comment on attachment 8975217 [details]
Bug 1460777 - Extract GPG keys to standalone files;
https://reviewboard.mozilla.org/r/243558/#review251628
Attachment #8975217 - Flags: review?(mh+mozilla) → review+
Assignee
Comment 25 • 7 years ago
Comment on attachment 8974862 [details]
Bug 1460777 - Taskgraph tasks for retrieving remote content;
https://reviewboard.mozilla.org/r/243240/#review251618
> there's a theoretical path encoding problem hidden in here, isn't there?
Can you elaborate?
Comment 26 • 7 years ago
(In reply to Gregory Szorc [:gps] from comment #25)
> Comment on attachment 8974862 [details]
> Bug 1460777 - Taskgraph tasks for retrieving remote content;
>
> https://reviewboard.mozilla.org/r/243240/#review251618
>
> > there's a theoretical path encoding problem hidden in here, isn't there?
>
> Can you elaborate?
'%s.tmp' % path.name would do the wrong thing in the theoretical case where the path is not unicode, right?
Assignee
Comment 27 • 7 years ago
(In reply to Mike Hommey [:glandium] from comment #26)
> '%s.tmp' % path.name would do the wrong thing in the theoretical case where
> the path is not unicode, right?
This file is Python 3 and all variables are str, which means they are Unicode as far as Python is concerned. Python should apply the default filesystem encoding to convert those Unicode code points back to filenames.
There are probably some corner cases here. But most Python programs (including this one) can probably ignore them.
Comment hidden (mozreview-request)
Comment hidden (mozreview-request)
Comment 30 • 7 years ago
Comment on attachment 8974862 [details]
Bug 1460777 - Taskgraph tasks for retrieving remote content;
https://reviewboard.mozilla.org/r/243240/#review255644
I'm still concerned by the repetition of version numbers here and there, and by the fact that we don't "cross-validate" that we get the right things wrt contrib/download_prerequisites, but meh.
::: taskcluster/ci/fetch/toolchains.yml:4
(Diff revision 3)
> +binutils-2.25.1:
> + description: binutils 2.25.1 source code
> + treeherder:
> + symbol: binutils2.25.1
maybe wrap the symbols with a S()?
::: taskcluster/ci/fetch/toolchains.yml:33
(Diff revision 3)
> + description: binutils 2.28.1 source code
> + treeherder:
> + symbol: binutils2.28.1
> + run:
> + using: fetch-url
> + url: ftp://ftp.gnu.org/gnu/binutils/binutils-2.28.1.tar.bz2
why not use the .xz archive like in the current scripts?
::: taskcluster/taskgraph/transforms/use_fetches.py:24
(Diff revision 3)
> + if value:
> + dict[key] = value
> +
> +
> +@transforms.add
> +def use_fetches(config, jobs):
There's an open question about whether to detect unused fetches and complain about them. There's the same problem with toolchains, but some toolchains are meant to be consumed by non-taskcluster things (e.g. local developers downloading the mac clang builds).
Attachment #8974862 - Flags: review?(mh+mozilla) → review+
Assignee
Comment 31 • 7 years ago
Comment on attachment 8974862 [details]
Bug 1460777 - Taskgraph tasks for retrieving remote content;
https://reviewboard.mozilla.org/r/243240/#review255644
I don't like that either.
FWIW, I plan to follow up on this with the ability to "repackage" archives. One of the things that will do is support stripping N leading path components and replacing with something else. Once we do that, we can e.g. replace `gcc-7.4.0` with `gcc`. This will eliminate the need to create the `gcc -> gcc-7.4.0` symlink, which will remove some of the version strings from the `build-*.sh` scripts.
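A sketch of what that follow-up repackaging could look like (hypothetical helper, not part of this patch):

```python
import tarfile


def repackage(src, dest, new_root):
    # Re-root archive members, e.g. gcc-7.4.0/... -> gcc/..., so build
    # scripts no longer need to hardcode version numbers in paths.
    with tarfile.open(src) as inp, tarfile.open(dest, 'w:xz') as out:
        for member in inp.getmembers():
            parts = member.name.split('/', 1)
            member.name = new_root + ('/' + parts[1] if len(parts) > 1 else '')
            fileobj = inp.extractfile(member) if member.isfile() else None
            out.addfile(member, fileobj)
```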
> why not use the xz like in the current scripts?
This is an oversight on my part.
Assignee
Comment 32 • 7 years ago
Comment on attachment 8974862 [details]
Bug 1460777 - Taskgraph tasks for retrieving remote content;
https://reviewboard.mozilla.org/r/243240/#review255644
> There's an open question about whether to detect unused fetches and complain about them. There's the same problem with toolchains, but some toolchains are meant to be consumed by non-taskcluster things (e.g. local developers downloading the mac clang builds).
I'll deal with this in a follow-up. There's still a lot I want to do with these tasks. I was trying to go for a minimal viable solution in this bug.
Assignee
Comment 33 • 7 years ago
Comment on attachment 8974862 [details]
Bug 1460777 - Taskgraph tasks for retrieving remote content;
https://reviewboard.mozilla.org/r/243240/#review255644
> maybe wrap the symbols with a S()?
I'm not sure what group name to use. Since we need to annotate groups to pass lint and since this is purely a cosmetic change, I'm deferring this change to a follow-up.
Comment hidden (mozreview-request)
Comment hidden (mozreview-request)
Comment 36 • 7 years ago
Pushed by gszorc@mozilla.com:
https://hg.mozilla.org/integration/autoland/rev/60ed097650b8
Extract GPG keys to standalone files; r=glandium
https://hg.mozilla.org/integration/autoland/rev/52ef9348401d
Taskgraph tasks for retrieving remote content; r=dustin,glandium
Updated • 7 years ago
See Also: → mingw-clang
Comment 37 • 7 years ago
Backed out 2 changesets (bug 1460777) for Toolchains failure on a CLOSED TREE
Backout link: https://hg.mozilla.org/integration/autoland/rev/6d5d80cfaaae6c787d67ac508eba04d2d77de689
Push with failures: https://treeherder.mozilla.org/#/jobs?repo=autoland&revision=52ef9348401dd27c93b5954b067503ac1634a475
Log link: https://treeherder.mozilla.org/logviewer.html#?job_id=182128354&repo=autoland
Log snippet:
[task 2018-06-06T17:47:43.844Z] + export TMPDIR=/tmp/
[task 2018-06-06T17:47:43.844Z] + TMPDIR=/tmp/
[task 2018-06-06T17:47:43.844Z] + export gcc_bindir=/builds/worker/workspace/build/src/gcc/bin
[task 2018-06-06T17:47:43.844Z] + gcc_bindir=/builds/worker/workspace/build/src/gcc/bin
[task 2018-06-06T17:47:43.844Z] + export gmp_prefix=/tools/gmp
[task 2018-06-06T17:47:43.844Z] + gmp_prefix=/tools/gmp
[task 2018-06-06T17:47:43.844Z] + export gmp_dir=/builds/worker/workspace/build/tools/gmp
[task 2018-06-06T17:47:43.844Z] + gmp_dir=/builds/worker/workspace/build/tools/gmp
[task 2018-06-06T17:47:43.844Z] + prepare_sixgill
[task 2018-06-06T17:47:43.844Z] + cd /builds/worker/workspace/build
[task 2018-06-06T17:47:43.844Z] + hg clone -r ab06fc42cf0f https://hg.mozilla.org/users/sfink_mozilla.com/sixgill
[task 2018-06-06T17:47:51.260Z]
[task 2018-06-06T17:47:52.263Z] files [===> ] 97/1195 1m06s
[task 2018-06-06T17:47:52.543Z] files [======================================> ] 830/1195 03s
[task 2018-06-06T17:47:52.553Z]
[task 2018-06-06T17:47:52.553Z] destination directory: sixgill
[task 2018-06-06T17:47:52.553Z] adding changesets
[task 2018-06-06T17:47:52.553Z] adding manifests
[task 2018-06-06T17:47:52.553Z] adding file changes
[task 2018-06-06T17:47:52.553Z] added 231 changesets with 2230 changes to 1195 files
[task 2018-06-06T17:47:52.598Z] new changesets 97c3858e4994:ab06fc42cf0f
[task 2018-06-06T17:47:52.598Z] updating to branch default
[task 2018-06-06T17:47:52.702Z] 1153 files updated, 0 files merged, 0 files removed, 0 files unresolved
[task 2018-06-06T17:47:52.717Z] + build_gmp
[task 2018-06-06T17:47:52.717Z] + '[' -x /builds/worker/workspace/build/src/gcc/bin/gcc ']'
[task 2018-06-06T17:47:52.717Z] + mkdir /builds/worker/workspace/build/gmp-objdir
[task 2018-06-06T17:47:52.718Z] + cd /builds/worker/workspace/build/gmp-objdir
[task 2018-06-06T17:47:52.718Z] + /builds/worker/workspace/build/gcc-6.4.0/gmp/configure --disable-shared --with-pic --prefix=/tools/gmp
[task 2018-06-06T17:47:52.718Z] workspace/build/src/taskcluster/scripts/misc/build-gcc-sixgill-plugin-linux.sh: line 65: /builds/worker/workspace/build/gcc-6.4.0/gmp/configure: No such file or directory
[taskcluster 2018-06-06 17:47:53.772Z] === Task Finished ===
[taskcluster 2018-06-06 17:47:54.633Z] Unsuccessful task run with exit code: 1 completed in 221.657 seconds
Flags: needinfo?(gps)
Comment 38 • 7 years ago
Pushed by gszorc@mozilla.com:
https://hg.mozilla.org/integration/autoland/rev/e54687f110b1
Extract GPG keys to standalone files; r=glandium
https://hg.mozilla.org/integration/autoland/rev/90dca0906337
Taskgraph tasks for retrieving remote content; r=dustin, glandium
Comment 39 • 7 years ago
bugherder
https://hg.mozilla.org/mozilla-central/rev/e54687f110b1
https://hg.mozilla.org/mozilla-central/rev/90dca0906337
Status: ASSIGNED → RESOLVED
Closed: 7 years ago
status-firefox62: --- → fixed
Resolution: --- → FIXED
Target Milestone: --- → mozilla62
Assignee
Updated • 7 years ago
Flags: needinfo?(gps)
Comment 40 • 7 years ago
Pushed by mozilla@hocat.ca:
https://hg.mozilla.org/comm-central/rev/41a1e39051d2
Port Bug 1460777: Taskgraph tasks for retrieving remote content; rs=bustage-fix
Updated • 7 years ago
Comment 41 • 7 years ago
I've updated this for esr60 at https://hg.mozilla.org/users/mozilla_hocat.ca/esr60-stage/rev/54620248e615. :gps, could you have a look and, in particular, check that I've properly adjusted things to reflect the toolchains in esr60?
Flags: needinfo?(gps)
Assignee
Comment 42 • 7 years ago
(In reply to Tom Prince [:tomprince] (limited availability Jul 16-29) from comment #41)
> I've updated this for esr60 at
> https://hg.mozilla.org/users/mozilla_hocat.ca/esr60-stage/rev/54620248e615.
> :gps, could you have a look and, in particular, check that I've properly
> adjusted things to reflect the toolchains in esr60?
That URL says revision not found. I'm guessing you obsoleted it? So, sadly, I can't review what I cannot see.
Flags: needinfo?(gps)
Comment 43 • 7 years ago
:gps,
Sorry about that, and for the delay. The commit is https://hg.mozilla.org/users/mozilla_hocat.ca/esr60-stage/rev/dd9f2b4da522 and attached.
Attachment #9005086 - Flags: feedback?(gps)
Assignee
Comment 44 • 7 years ago
Comment on attachment 9005086 [details] [diff] [review]
esr60-uplift.patch
This seems reasonable!
Attachment #9005086 - Flags: feedback?(gps) → feedback+
Updated • 6 years ago
Version: Version 3 → 3 Branch